Health and Musical Practice exploration server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Applying Deep Learning Techniques to Estimate Patterns of Musical Gesture.

Internal identifier: 000362 (Main/Exploration); previous: 000361; next: 000363


Authors: David Dalmazzo [Spain]; George Waddell [United Kingdom]; Rafael Ramírez [Spain]

Source :

RBID : pubmed:33469435

Abstract

Repetitive practice is one of the most important factors in improving the performance of motor skills. This paper focuses on the analysis and classification of forearm gestures in the context of violin playing. We recorded five experts and three students performing eight traditional classical violin bow-strokes: martelé, staccato, detaché, ricochet, legato, trémolo, collé, and col legno. To record inertial motion information, we utilized the Myo sensor, which reports a multidimensional time-series signal. We synchronized inertial motion recordings with audio data to extract the spatiotemporal dynamics of each gesture. Applying state-of-the-art deep neural networks, we implemented and compared different architectures where convolutional neural networks (CNN) models demonstrated recognition rates of 97.147%, 3DMultiHeaded_CNN models showed rates of 98.553%, and rates of 99.234% were demonstrated by CNN_LSTM models. The collected data (quaternion of the bowing arm of a violinist) contained sufficient information to distinguish the bowing techniques studied, and deep learning methods were capable of learning the movement patterns that distinguish these techniques. Each of the learning algorithms investigated (CNN, 3DMultiHeaded_CNN, and CNN_LSTM) produced high classification accuracies which supported the feasibility of training classifiers. The resulting classifiers may provide the foundation of a digital assistant to enhance musicians' time spent practicing alone, providing real-time feedback on the accuracy and consistency of their musical gestures in performance.
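The pipeline described in the abstract feeds fixed-length segments of a quaternion time series (the bowing arm's orientation) into CNN/LSTM classifiers. As a minimal sketch of the standard input framing for such models, the function below slices an (N, 4) quaternion stream into overlapping windows; the window and hop sizes are illustrative values, not taken from the paper.

```python
import numpy as np

def window_quaternions(stream, win=64, hop=32):
    """Slice an (N, 4) quaternion time series into overlapping
    fixed-length windows of shape (win, 4) -- the usual input
    framing for CNN/LSTM gesture classifiers.
    win and hop are illustrative defaults, not values from the paper.
    """
    segments = [stream[s:s + win]
                for s in range(0, len(stream) - win + 1, hop)]
    return np.stack(segments) if segments else np.empty((0, win, 4))

# Example: 200 samples of a constant unit-quaternion stream
stream = np.tile([1.0, 0.0, 0.0, 0.0], (200, 1))
X = window_quaternions(stream)
print(X.shape)  # (5, 64, 4): 5 windows, each 64 samples of 4 components
```

Each resulting window would then be labeled with its bow-stroke class and passed as one training sample to the classifier.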

DOI: 10.3389/fpsyg.2020.575971
PubMed: 33469435
PubMed Central: PMC7813937


Affiliations:


Links toward previous steps (curation, corpus...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Applying Deep Learning Techniques to Estimate Patterns of Musical Gesture.</title>
<author>
<name sortKey="Dalmazzo, David" sort="Dalmazzo, David" uniqKey="Dalmazzo D" first="David" last="Dalmazzo">David Dalmazzo</name>
<affiliation wicri:level="4">
<nlm:affiliation>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.</nlm:affiliation>
<country xml:lang="fr">Espagne</country>
<wicri:regionArea>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona</wicri:regionArea>
<placeName>
<settlement type="city">Barcelone</settlement>
<region nuts="2" type="region">Catalogne</region>
</placeName>
<orgName type="university">Université Pompeu Fabra</orgName>
</affiliation>
</author>
<author>
<name sortKey="Waddell, George" sort="Waddell, George" uniqKey="Waddell G" first="George" last="Waddell">George Waddell</name>
<affiliation wicri:level="3">
<nlm:affiliation>Centre for Performance Science, Royal College of Music, London, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Centre for Performance Science, Royal College of Music, London</wicri:regionArea>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
<affiliation wicri:level="3">
<nlm:affiliation>Faculty of Medicine, Imperial College London, London, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Faculty of Medicine, Imperial College London, London</wicri:regionArea>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Ramirez, Rafael" sort="Ramirez, Rafael" uniqKey="Ramirez R" first="Rafael" last="Ramírez">Rafael Ramírez</name>
<affiliation wicri:level="4">
<nlm:affiliation>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.</nlm:affiliation>
<country xml:lang="fr">Espagne</country>
<wicri:regionArea>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona</wicri:regionArea>
<placeName>
<settlement type="city">Barcelone</settlement>
<region nuts="2" type="region">Catalogne</region>
</placeName>
<orgName type="university">Université Pompeu Fabra</orgName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2020">2020</date>
<idno type="RBID">pubmed:33469435</idno>
<idno type="pmid">33469435</idno>
<idno type="doi">10.3389/fpsyg.2020.575971</idno>
<idno type="pmc">PMC7813937</idno>
<idno type="wicri:Area/Main/Corpus">000046</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Corpus" wicri:corpus="PubMed">000046</idno>
<idno type="wicri:Area/Main/Curation">000046</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Curation">000046</idno>
<idno type="wicri:Area/Main/Exploration">000046</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Applying Deep Learning Techniques to Estimate Patterns of Musical Gesture.</title>
<author>
<name sortKey="Dalmazzo, David" sort="Dalmazzo, David" uniqKey="Dalmazzo D" first="David" last="Dalmazzo">David Dalmazzo</name>
<affiliation wicri:level="4">
<nlm:affiliation>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.</nlm:affiliation>
<country xml:lang="fr">Espagne</country>
<wicri:regionArea>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona</wicri:regionArea>
<placeName>
<settlement type="city">Barcelone</settlement>
<region nuts="2" type="region">Catalogne</region>
</placeName>
<orgName type="university">Université Pompeu Fabra</orgName>
</affiliation>
</author>
<author>
<name sortKey="Waddell, George" sort="Waddell, George" uniqKey="Waddell G" first="George" last="Waddell">George Waddell</name>
<affiliation wicri:level="3">
<nlm:affiliation>Centre for Performance Science, Royal College of Music, London, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Centre for Performance Science, Royal College of Music, London</wicri:regionArea>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
<affiliation wicri:level="3">
<nlm:affiliation>Faculty of Medicine, Imperial College London, London, United Kingdom.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Faculty of Medicine, Imperial College London, London</wicri:regionArea>
<placeName>
<settlement type="city">Londres</settlement>
<region type="country">Angleterre</region>
<region type="région" nuts="1">Grand Londres</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Ramirez, Rafael" sort="Ramirez, Rafael" uniqKey="Ramirez R" first="Rafael" last="Ramírez">Rafael Ramírez</name>
<affiliation wicri:level="4">
<nlm:affiliation>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.</nlm:affiliation>
<country xml:lang="fr">Espagne</country>
<wicri:regionArea>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona</wicri:regionArea>
<placeName>
<settlement type="city">Barcelone</settlement>
<region nuts="2" type="region">Catalogne</region>
</placeName>
<orgName type="university">Université Pompeu Fabra</orgName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in psychology</title>
<idno type="ISSN">1664-1078</idno>
<imprint>
<date when="2020" type="published">2020</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Repetitive practice is one of the most important factors in improving the performance of motor skills. This paper focuses on the analysis and classification of forearm gestures in the context of violin playing. We recorded five experts and three students performing eight traditional classical violin bow-strokes:
<i>martelé, staccato, detaché, ricochet, legato, trémolo, collé</i>
, and
<i>col legno</i>
. To record inertial motion information, we utilized the
<i>Myo</i>
sensor, which reports a multidimensional time-series signal. We synchronized inertial motion recordings with audio data to extract the spatiotemporal dynamics of each gesture. Applying state-of-the-art deep neural networks, we implemented and compared different architectures where convolutional neural networks (CNN) models demonstrated recognition rates of 97.147%, 3DMultiHeaded_CNN models showed rates of 98.553%, and rates of 99.234% were demonstrated by CNN_LSTM models. The collected data (quaternion of the bowing arm of a violinist) contained sufficient information to distinguish the bowing techniques studied, and deep learning methods were capable of learning the movement patterns that distinguish these techniques. Each of the learning algorithms investigated (CNN, 3DMultiHeaded_CNN, and CNN_LSTM) produced high classification accuracies which supported the feasibility of training classifiers. The resulting classifiers may provide the foundation of a digital assistant to enhance musicians' time spent practicing alone, providing real-time feedback on the accuracy and consistency of their musical gestures in performance.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="PubMed-not-MEDLINE" Owner="NLM">
<PMID Version="1">33469435</PMID>
<DateRevised>
<Year>2021</Year>
<Month>01</Month>
<Day>22</Day>
</DateRevised>
<Article PubModel="Electronic-eCollection">
<Journal>
<ISSN IssnType="Print">1664-1078</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>11</Volume>
<PubDate>
<Year>2020</Year>
</PubDate>
</JournalIssue>
<Title>Frontiers in psychology</Title>
<ISOAbbreviation>Front Psychol</ISOAbbreviation>
</Journal>
<ArticleTitle>Applying Deep Learning Techniques to Estimate Patterns of Musical Gesture.</ArticleTitle>
<Pagination>
<MedlinePgn>575971</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.3389/fpsyg.2020.575971</ELocationID>
<Abstract>
<AbstractText>Repetitive practice is one of the most important factors in improving the performance of motor skills. This paper focuses on the analysis and classification of forearm gestures in the context of violin playing. We recorded five experts and three students performing eight traditional classical violin bow-strokes:
<i>martelé, staccato, detaché, ricochet, legato, trémolo, collé</i>
, and
<i>col legno</i>
. To record inertial motion information, we utilized the
<i>Myo</i>
sensor, which reports a multidimensional time-series signal. We synchronized inertial motion recordings with audio data to extract the spatiotemporal dynamics of each gesture. Applying state-of-the-art deep neural networks, we implemented and compared different architectures where convolutional neural networks (CNN) models demonstrated recognition rates of 97.147%, 3DMultiHeaded_CNN models showed rates of 98.553%, and rates of 99.234% were demonstrated by CNN_LSTM models. The collected data (quaternion of the bowing arm of a violinist) contained sufficient information to distinguish the bowing techniques studied, and deep learning methods were capable of learning the movement patterns that distinguish these techniques. Each of the learning algorithms investigated (CNN, 3DMultiHeaded_CNN, and CNN_LSTM) produced high classification accuracies which supported the feasibility of training classifiers. The resulting classifiers may provide the foundation of a digital assistant to enhance musicians' time spent practicing alone, providing real-time feedback on the accuracy and consistency of their musical gestures in performance.</AbstractText>
<CopyrightInformation>Copyright © 2021 Dalmazzo, Waddell and Ramírez.</CopyrightInformation>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Dalmazzo</LastName>
<ForeName>David</ForeName>
<Initials>D</Initials>
<AffiliationInfo>
<Affiliation>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Waddell</LastName>
<ForeName>George</ForeName>
<Initials>G</Initials>
<AffiliationInfo>
<Affiliation>Centre for Performance Science, Royal College of Music, London, United Kingdom.</Affiliation>
</AffiliationInfo>
<AffiliationInfo>
<Affiliation>Faculty of Medicine, Imperial College London, London, United Kingdom.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Ramírez</LastName>
<ForeName>Rafael</ForeName>
<Initials>R</Initials>
<AffiliationInfo>
<Affiliation>Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2021</Year>
<Month>01</Month>
<Day>05</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>Switzerland</Country>
<MedlineTA>Front Psychol</MedlineTA>
<NlmUniqueID>101550902</NlmUniqueID>
<ISSNLinking>1664-1078</ISSNLinking>
</MedlineJournalInfo>
<KeywordList Owner="NOTNLM">
<Keyword MajorTopicYN="N">CNN</Keyword>
<Keyword MajorTopicYN="N">CNN_LSTM</Keyword>
<Keyword MajorTopicYN="N">ConvLSTM</Keyword>
<Keyword MajorTopicYN="N">LSTM</Keyword>
<Keyword MajorTopicYN="N">bow-strokes</Keyword>
<Keyword MajorTopicYN="N">gesture recognition</Keyword>
<Keyword MajorTopicYN="N">music education</Keyword>
<Keyword MajorTopicYN="N">music interaction</Keyword>
</KeywordList>
<CoiStatement>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</CoiStatement>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2020</Year>
<Month>06</Month>
<Day>24</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2020</Year>
<Month>11</Month>
<Day>23</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2021</Year>
<Month>1</Month>
<Day>20</Day>
<Hour>5</Hour>
<Minute>54</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2021</Year>
<Month>1</Month>
<Day>21</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2021</Year>
<Month>1</Month>
<Day>21</Day>
<Hour>6</Hour>
<Minute>1</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>epublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">33469435</ArticleId>
<ArticleId IdType="doi">10.3389/fpsyg.2020.575971</ArticleId>
<ArticleId IdType="pmc">PMC7813937</ArticleId>
</ArticleIdList>
<ReferenceList>
<Reference>
<Citation>PLoS One. 2017 Oct 12;12(10):e0186132</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">29023548</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul;2018:1-4</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">30440301</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>IEEE Trans Neural Syst Rehabil Eng. 2019 Apr;27(4):760-771</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">30714928</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Sensors (Basel). 2020 Jan 26;20(3):</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">31991849</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>IEEE Trans Biomed Eng. 2017 Mar;64(3):621-628</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">28113209</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>PLoS One. 2017 Feb 1;12(2):e0169649</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">28146576</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Front Psychol. 2019 Mar 04;10:344</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">30886595</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Sensors (Basel). 2020 Mar 24;20(6):</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">32214039</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Sensors (Basel). 2016 Jan 18;16(1):</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">26797612</ArticleId>
</ArticleIdList>
</Reference>
</ReferenceList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>Espagne</li>
<li>Royaume-Uni</li>
</country>
<region>
<li>Angleterre</li>
<li>Catalogne</li>
<li>Grand Londres</li>
</region>
<settlement>
<li>Barcelone</li>
<li>Londres</li>
</settlement>
<orgName>
<li>Université Pompeu Fabra</li>
</orgName>
</list>
<tree>
<country name="Espagne">
<region name="Catalogne">
<name sortKey="Dalmazzo, David" sort="Dalmazzo, David" uniqKey="Dalmazzo D" first="David" last="Dalmazzo">David Dalmazzo</name>
</region>
<name sortKey="Ramirez, Rafael" sort="Ramirez, Rafael" uniqKey="Ramirez R" first="Rafael" last="Ramírez">Rafael Ramírez</name>
</country>
<country name="Royaume-Uni">
<region name="Angleterre">
<name sortKey="Waddell, George" sort="Waddell, George" uniqKey="Waddell G" first="George" last="Waddell">George Waddell</name>
</region>
<name sortKey="Waddell, George" sort="Waddell, George" uniqKey="Waddell G" first="George" last="Waddell">George Waddell</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Sante/explor/SanteMusiqueV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000362 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000362 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Sante
   |area=    SanteMusiqueV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:33469435
   |texte=   Applying Deep Learning Techniques to Estimate Patterns of Musical Gesture.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:33469435" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a SanteMusiqueV1 

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Mon Mar 8 15:23:44 2021. Site generation: Mon Mar 8 15:23:58 2021